Whisper Leak: Encrypted AI chats can still reveal their topic
Whisper Leak shows that even over TLS, streaming responses create telltale packet-size and timing patterns. Across 28 popular models, these fingerprints let a passive eavesdropper classify prompt topics with near-perfect accuracy, flag sensitive ones like "money laundering", and even recover 5-20% of matching chats amid heavy background noise.
Why it matters: a network observer (ISP, employer, government, or someone on the same Wi-Fi) could infer what you ask an AI without seeing the text.
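The core idea can be sketched with a toy simulation (this is illustrative only, not the paper's classifier; the token-length distributions, the 22-byte record overhead, and the nearest-centroid heuristic are all assumptions): each streamed token becomes one encrypted record, so record sizes mirror token lengths, and those sizes alone can separate topics.

```python
# Toy sketch: ciphertext record sizes leak the length pattern of streamed
# tokens, and a trivial statistic over those sizes distinguishes topics.
import random

OVERHEAD = 22  # assumed fixed per-record encryption overhead (illustrative)

def stream_record_sizes(token_lengths):
    """One encrypted record per streamed token: plaintext length + overhead."""
    return [n + OVERHEAD for n in token_lengths]

def mean(xs):
    return sum(xs) / len(xs)

random.seed(0)
# Two hypothetical topics whose responses use different token-length mixes.
topic_a = [stream_record_sizes([random.randint(2, 5) for _ in range(50)])
           for _ in range(20)]
topic_b = [stream_record_sizes([random.randint(5, 9) for _ in range(50)])
           for _ in range(20)]

centroid_a = mean([mean(trace) for trace in topic_a])
centroid_b = mean([mean(trace) for trace in topic_b])

def classify(record_sizes):
    """Nearest-centroid on mean record size -- no plaintext needed."""
    m = mean(record_sizes)
    return "A" if abs(m - centroid_a) < abs(m - centroid_b) else "B"

probe = stream_record_sizes([random.randint(2, 5) for _ in range(50)])
print(classify(probe))
```

Even this crude statistic separates the two synthetic topics; the paper's attackers use far richer features (full size and inter-arrival sequences) and trained classifiers.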
Mitigations (not a full fix)
- Random padding, token batching, and packet injection reduce the leakage, but none of them eliminates it.
- Providers have begun deploying defenses after responsible disclosure.
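Two of the mitigations above can be sketched in a few lines (bucket and batch sizes here are assumptions for illustration, not any provider's actual parameters): padding rounds every record up to a size bucket so individual token lengths vanish, and batching merges several tokens into one record so per-token timing and size blur together.

```python
# Illustrative mitigation sketch: pad record sizes to fixed buckets, and
# batch multiple tokens per encrypted record.
OVERHEAD = 22  # assumed per-record overhead, as in the attack sketch

def pad_to_bucket(size, bucket=32):
    """Round a record size up to the next multiple of `bucket` bytes."""
    return ((size + bucket - 1) // bucket) * bucket

def batch_tokens(token_lengths, batch=4):
    """Emit one record per `batch` tokens instead of one per token."""
    return [sum(token_lengths[i:i + batch])
            for i in range(0, len(token_lengths), batch)]

tokens = [3, 7, 2, 9, 4, 6]
padded = [pad_to_bucket(n + OVERHEAD) for n in tokens]
print(padded)                 # every record now shows the same bucketed size
print(batch_tokens(tokens))   # six tokens collapse into two records
```

Both tricks trade bandwidth or latency for privacy, which is why they reduce the fingerprint rather than remove it: coarse totals and timing still carry some signal.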
What you can do now: prefer non-streaming replies when possible, use trusted networks and a VPN, and avoid highly sensitive prompts on untrusted connections until stronger protections arrive.
Paper: http://arxiv.org/abs/2511.03675v1
Tags: AI Security, Privacy, LLM, Cybersecurity, Metadata, Side Channel, Research, Network Traffic