      Tech

      New Side-Channel Threat: “Whisper Leak” Can Hear Conversation Topics Despite Encryption


Security researchers at Microsoft have disclosed a new vulnerability, dubbed “Whisper Leak,” that allows a passive adversary to infer the topic of encrypted conversations with AI language models simply by monitoring packet sizes and timing in streaming responses, even though the content itself remains encrypted under TLS. In experiments across 28 major models from providers including OpenAI, Mistral AI, and xAI, the researchers trained classifiers (LightGBM, Bi-LSTM, BERT-based) and found that many models yielded over 98% AUPRC (area under the precision-recall curve) in distinguishing sensitive-topic prompts (e.g., “money laundering”) from general traffic. The finding underscores that encryption alone, while protecting message contents, is insufficient to hide metadata in streaming AI services; the privacy risk is especially elevated for users on untrusted networks, in authoritarian jurisdictions, or on shared Wi-Fi. Microsoft and partners have begun rolling out mitigations such as random padding, token batching, and fake packet injection, but these reduce rather than eliminate the risk.

      Sources: Hacker News, Microsoft

      Key Takeaways

      – Adversaries who can observe encrypted AI-model traffic (e.g., network ISPs, Wi-Fi eavesdroppers, local adversaries) can train models to infer conversation topics purely from packet-size and timing metadata—even though message content is encrypted.

– The Whisper Leak threat has been validated across a wide span of commercial LLM services, with many classifiers achieving very high accuracy (>98% AUPRC) under controlled conditions; even under a realistic noise ratio (10,000:1 benign-to-target traffic), 100% precision at modest recall (5–20%) was demonstrated.

      – Mitigation strategies such as random response padding, batching of tokens, and packet-injection help reduce the risk, but none fully eliminate it—so users and enterprises should assume topic-leakage remains a live threat when using streaming AI models, especially on untrusted networks.

      In-Depth

      In the evolving world of generative AI and large-language-model (LLM) services, confidentiality has largely focused on encrypting the data in transit and at rest. But the new research from Microsoft flips that assumption: even when the content of the conversation is protected via TLS or HTTPS, the metadata associated with streaming LLM responses—specifically packet sizes and inter-packet timing—can betray the topic of the user’s prompt. This attack class, labeled “Whisper Leak,” arises because many LLM services stream their output token by token (or in small batches) as soon as each token is generated, and that streaming behavior reveals consistent and learnable patterns in encrypted traffic. 
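
The per-token leakage described above can be sketched in a few lines. This is an illustrative simulation, not Microsoft's code: it assumes each streamed token travels in its own encrypted record with a fixed per-record overhead (`TLS_OVERHEAD` is an assumed constant), so the observable ciphertext size tracks the token's byte length.

```python
# Illustrative sketch: how token-by-token streaming exposes token lengths
# through encrypted record sizes. TLS_OVERHEAD is an assumption standing
# in for record header plus AEAD tag bytes.
TLS_OVERHEAD = 29

def observed_record_sizes(tokens):
    """Sizes a passive observer sees: payload length + fixed overhead."""
    return [len(tok.encode("utf-8")) + TLS_OVERHEAD for tok in tokens]

# Two hypothetical streamed replies on different topics produce
# distinguishable size sequences even though the bytes are encrypted.
reply_a = ["Money", " laundering", " is", " the", " process", " of"]
reply_b = ["Sure", "!", " Here", " is", " a", " pasta", " recipe"]

print(observed_record_sizes(reply_a))
print(observed_record_sizes(reply_b))
```

The observer never decrypts anything; the size sequence alone is the signal.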

The threat model is quite practical: a passive on-path observer (for example, someone controlling the local Wi-Fi network, a government monitoring ISP traffic, or an insider monitoring corporate VPN egress) records encrypted sessions between a user and an AI service. They cannot decrypt the traffic, yet by extracting sequences of packet-length and timing features they can feed a trained classifier that determines whether the user’s prompt concerns a sensitive subject (e.g., finance, political dissent, health issues). Microsoft’s proof of concept used binary classification on a “target topic vs. background” dataset and found that many leading models yielded near-perfect results: classifiers often achieved >98% AUPRC, and for 17 of the 28 models tested reached 100% precision at 5–20% recall in a 10,000:1 noise scenario.
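
The attack pipeline can be caricatured as follows. This toy sketch uses hand-made summary features (mean and spread of packet sizes) and a nearest-centroid classifier; the real study trained LightGBM, Bi-LSTM, and BERT-based models on full size-and-timing sequences, so everything below, including the synthetic sessions, is an assumption for illustration only.

```python
# Toy sketch of the classification step: summarize each recorded session
# by (mean, std) of its packet sizes, then label new sessions by the
# nearest class centroid. Purely illustrative; not the study's method.
from statistics import mean, pstdev

def features(packet_sizes):
    return (mean(packet_sizes), pstdev(packet_sizes))

def train_centroids(labelled_sessions):
    """labelled_sessions: {label: [session, ...]}, session = [sizes]."""
    cents = {}
    for label, sessions in labelled_sessions.items():
        feats = [features(s) for s in sessions]
        cents[label] = tuple(mean(c) for c in zip(*feats))
    return cents

def classify(session, centroids):
    f = features(session)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lab])))

# Synthetic data: the "target" topic happens to yield longer packets.
train = {
    "target":     [[90, 95, 100, 98], [88, 93, 101, 99]],
    "background": [[40, 45, 50, 42], [38, 44, 49, 41]],
}
cents = train_centroids(train)
print(classify([91, 96, 99, 97], cents))  # → target
```

Even this crude summary separates the two classes; sequence models exploiting timing as well are what push accuracy toward the reported >98% AUPRC.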

      What makes the result especially significant for real-world risk is the fact that streaming is a default feature in many AI-chat platforms and APIs: users want immediate responses, so the service emits tokens as they are generated. That behavior creates regular, repeated patterns in data length and timing that are exposed even after encryption, because TLS does not hide packet size or timing—it only hides the payload contents. As the Microsoft blog explains: “While TLS encrypts content, metadata such as packet sizes and timings remain observable” and hence exploitable. 

It’s also important to note that the problem is not purely academic: many of the models tested were from commercial vendors and were found vulnerable. The researchers evaluated 28 models from major providers (including Alibaba Qwen3, DeepSeek, Meta Llama 3.3, Microsoft Phi-4, Mistral Large 2, OpenAI GPT-OSS-20b, and Zhipu AI GLM 4.5) and found substantial side-channel vulnerability across the board.

In terms of mitigation, the researchers evaluated three main techniques: (1) random padding of responses (adding variable-length dummy tokens); (2) token batching (sending multiple tokens in a single network packet, thereby reducing granularity); and (3) packet injection (adding spurious packets to mask true patterns). Their experiments show these reduce, but do not eliminate, the attack’s effectiveness. For example, even with random padding the classifier AUPRC might drop only from ~97.5% to ~92.9%, depending on the model. For some use cases (e.g., high-sensitivity prompts on untrusted networks) the residual risk may still be unacceptable.
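
The first two mitigations can be sketched as follows. This is a hedged illustration under assumed framing (a two-byte length prefix so the receiver can strip padding, zero-byte filler, token batches joined per network write), not any provider's actual wire format.

```python
# Illustrative sketches of two mitigations named above, under assumed
# framing conventions. Not a production protocol.
import random

def pad_chunk(chunk: bytes, max_pad: int = 32, rng=random) -> bytes:
    """Random padding: append filler so ciphertext size no longer tracks
    token length. A 2-byte length prefix (an assumption) lets the
    receiver recover the real payload."""
    filler = bytes(rng.randrange(max_pad + 1))  # 0..max_pad zero bytes
    return len(chunk).to_bytes(2, "big") + chunk + filler

def batch_tokens(tokens, batch_size=4):
    """Token batching: emit several tokens per network write, coarsening
    the per-token signal an observer can exploit."""
    for i in range(0, len(tokens), batch_size):
        yield "".join(tokens[i:i + batch_size])

tokens = ["The", " answer", " is", " forty", "-", "two", "."]
print(list(batch_tokens(tokens)))
```

Both tricks blur, rather than remove, the signal: padding bounded by `max_pad` still leaks coarse length, and batching still leaks batch counts and timing, which is consistent with the partial AUPRC reduction the researchers report.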

      From a conservative or risk-aware vantage point, there are several implications that users, enterprises, and policy makers should consider:

      – Encryption of content is necessary but not sufficient for protecting privacy when using streaming AI services: adversaries still glean information from “how you speak” (i.e., the traffic pattern).

      – Deploying AI chat or query services into sensitive domains (healthcare, legal advice, corporate secrets, activism) on untrusted networks (public Wi-Fi, state-monitored ISPs, shared infrastructure) introduces a significant metadata-leakage risk.

– Enterprises that integrate LLMs into internal workflows (e.g., for content creation, research, or compliance) should insist on providers offering traffic-obfuscation features, batch-streaming options, or, better yet, non-streaming modes, and should consider executing AI inference within fully controlled network environments.

– User education remains critical: advise users to avoid discussing highly sensitive topics via streaming models on insecure networks, to use VPNs where feasible, and to prefer AI services that explicitly document mitigation of streaming side channels. Microsoft emphasizes exactly this: if you must ask about sensitive topics on an untrusted network, consider using non-streaming models or switching to a provider that has implemented the countermeasures.

      – Policymakers and standards-bodies should arguably broaden the threat model for AI systems to include metadata side-channels as a recognized privacy risk—not just payload encryption.

      In short, Whisper Leak highlights that as AI technologies grow ever more integrated into sensitive workflows, the adversary model must evolve. It’s no longer enough to secure “what is said”; how it’s said (and how you receive responses) matters too. Streaming APIs offer responsiveness and low latency—but that speed comes at a cost. For those concerned about confidentiality, the tradeoffs warrant reconsideration: opting for non-streaming modes, requiring providers to obscure traffic patterns, or limiting AI usage to trusted network contexts. The adversary that cannot break the encryption still may get the story by observing timing and packet size alone—and for conservative organizations and individuals, that’s a privacy hazard that can’t be ignored.

For content creators and media producers who handle proprietary scripts, unpublished interview material, or early promotional concepts through AI-powered tools, acknowledging these risks is prudent. Anyone integrating an LLM service for research, scripting, or social-media draft generation should ask: Am I on an open network? Does the service stream responses? Does the provider batch tokens or pad responses to protect against topic inference? If not, consider mitigating steps, including local inference (if viable) or encrypted tunnels with padding.

      Ultimately, the whisper you never spoke may still get heard.
