Close Menu

    Subscribe to Updates

    Get the latest tech news from Tallwire.

      What's Hot

      Artemis II Splashdown Signals A Step Closer to Mass Space Travel

      April 12, 2026

      Anthropic Code Leak Raises Questions About AI Security and Industry Oversight

      April 8, 2026

      NASA Astronauts Use iPhones to Capture Historic Artemis II Mission Images

      April 8, 2026
      Facebook X (Twitter) Instagram
      • Tech
      • AI
      • Get In Touch
      Facebook X (Twitter) LinkedIn
      TallwireTallwire
      • Tech

        NASA Astronauts Use iPhones to Capture Historic Artemis II Mission Images

        April 8, 2026

        OpenAI Expands Influence With Strategic TBPN Media Acquisition

        April 8, 2026

        Cybersecurity Veteran Turns Focus To Drone Hacking After Decades Battling Malware

        April 6, 2026

        Anonymous Social App Surges In Saudi Arabia, Testing Limits Of Digital Freedom

        April 6, 2026

        Peter Thiel’s Bold Ag-Tech Gamble Signals High-Tech Disruption of Traditional Ranching

        April 6, 2026
      • AI

        Anthropic Code Leak Raises Questions About AI Security and Industry Oversight

        April 8, 2026

        The Rise Of Agentic AI Signals A Shift From Tools To Autonomous Digital Actors

        April 8, 2026

        AI Chatbots Draw Scrutiny As Teens Engage In Intimate Roleplay And Emotional Dependency

        April 8, 2026

        Ai-Powered Startup Signals Rise Of One-Person Billion-Dollar Companies

        April 8, 2026

        OpenAI Secures Historic $122 Billion Funding Round at $852 Billion Valuation

        April 7, 2026
      • Security

        Anthropic Code Leak Raises Questions About AI Security and Industry Oversight

        April 8, 2026

        DeFi Platform Drift Halts Operations After Multi-Million Dollar Crypto Hack

        April 7, 2026

        Fake WhatsApp App Exposes Users To Government Spyware Operation

        April 7, 2026

        ICE Deploys Controversial Spyware Tool In Drug Trafficking Investigations

        April 7, 2026

        Telehealth Firm Discloses Breach Amid Rising Digital Health Vulnerabilities

        April 6, 2026
      • Health

        European Crackdown Targets Social Media’s Impact on Children

        April 8, 2026

        AI Chatbots Draw Scrutiny As Teens Engage In Intimate Roleplay And Emotional Dependency

        April 8, 2026

        Australia Moves To Curb Social Media Addiction Among Youth With Expanded Under-16 Ban

        April 5, 2026

        Australia’s eSafety Regulator Warns Big Tech As Teens Circumvent Social Media Restrictions

        April 5, 2026

        Meta Finally Held Accountable For Harming Teens, But Real Reform Remains Uncertain

        April 2, 2026
      • Science

        Artemis II Splashdown Signals A Step Closer to Mass Space Travel

        April 12, 2026

        Peter Thiel’s Bold Ag-Tech Gamble Signals High-Tech Disruption of Traditional Ranching

        April 6, 2026

        White House Tech Advisor David Sacks Steps Down To Lead Presidential Science Advisory

        March 31, 2026

        Blue Origin’s Orbital Data Center Push Signals New Frontier in Tech Infrastructure

        March 27, 2026

        Quantum Cryptography Pioneers Awarded Computing’s Highest Honor

        March 25, 2026
      • Tech

        Peter Thiel’s Bold Ag-Tech Gamble Signals High-Tech Disruption of Traditional Ranching

        April 6, 2026

        Zuckerberg Quietly Offers Musk Support As Tech Titans Align Around Government Power

        April 4, 2026

        White House Tech Advisor David Sacks Steps Down To Lead Presidential Science Advisory

        March 31, 2026

        Another Billionaire Signals Exit As California’s Taxes Drives Out High-Profile Entrepreneurs

        March 28, 2026

        Bezos Eyes $100 Billion War Chest To Rewire Legacy Industry With AI

        March 28, 2026
      TallwireTallwire
      Home»Tech»Open-Weight AI Models Found Deeply Vulnerable to Jailbreak Attacks
      Tech

      Open-Weight AI Models Found Deeply Vulnerable to Jailbreak Attacks

      5 Mins Read
      Facebook Twitter Pinterest LinkedIn Tumblr Email
      Open-Weight AI Models Found Deeply Vulnerable to Jailbreak Attacks
      Open-Weight AI Models Found Deeply Vulnerable to Jailbreak Attacks
      Share
      Facebook Twitter LinkedIn Pinterest Email

      A recent security analysis by Cisco Systems has found that several leading open-weight artificial intelligence models—those whose trained parameters are publicly available and modifiable—are highly susceptible to what are known as multi-turn “jailbreak” attacks. The study, entitled “Death by a Thousand Prompts”, reports that in tests across eight such models success rates for multi-turn attacks ranged as high as 92.78 % (on Mistral Large‑2) and as low as 25.86 % (on Google Gemma 3‑1B‑IT) — representing roughly 2× to 10× higher vulnerability than single-turn attempts. The findings highlight that models prioritising raw capability over alignment to human values (for example, those from Meta Platforms and Mistral) exhibit significantly worse resilience compared with those built with a stronger emphasis on alignment (such as Google’s). Real-world implications cited include risks of data exfiltration, code-generation for illicit purposes, and compromised decision-support systems in enterprise environments.

      Sources: Cisco, IT Brew

      Key Takeaways

      – Multi-turn jailbreak attacks—those involving a sequence of crafted prompts rather than a single command—are far more effective, with success rates up to 10 times higher than single-turn attacks in the models tested.

      – The relative lack of alignment (i.e., built-in guardrails to human values and ethical constraints) in many open-weight models makes them especially prone to misuse; organisations that deploy capability-first models without strong safety layers are taking a major risk.

      – For enterprises and developers selecting models for production use, simply picking an open-weight model because of cost or ease of customization is not enough: one must factor in the model’s security posture, monitoring ability, and robustness to sustained adversarial input.

      In-Depth

      In the rapidly evolving world of artificial intelligence, open-weight models have emerged as a major driver of innovation. These models allow developers and researchers full access to the model parameters, enabling fine-tuning, customisation and deployment at lower cost and greater flexibility than closed-proprietary alternatives. Yet, the very openness that drives their appeal also carries a significant downside: a vulnerability to adversarial manipulation. A recent study by Cisco’s AI Threat Research team reveals this weakness in stark terms.

      Their investigation focused on eight prominent open-weight large language models (LLMs), including those from Alibaba, DeepSeek, Google, Meta, Microsoft, Mistral, OpenAI (when releasing open-weight variants) and Zhipu AI. The researchers used Cisco’s “AI Validation” tool to subject these models to a series of malicious prompts, both in single-turn and multi-turn formats. Multi-turn attacks build momentum: a series of prompts that gradually steer the model into unsafe or unintended behaviour, often by establishing a benign context then introducing malicious requests disguised inside innocuous conversation. The results showed that multi-turn success rates ranged from 25.86 % (Gemma) to 92.78 % (Mistral) — an average around 64.21 % across models, compared to much lower single-turn rates (on average ~13.11 %).

      Why does this matter? Because in real-world deployments—chatbots, virtual assistants, decision-support systems—the interaction is rarely a single prompt but an extended conversation. If a model can be manipulated over multiple turns to bypass guardrails, the consequences can be severe: disclosure of sensitive data, generation of illicit code or instructions, subversion of decision logic, or simply bias and misinformation embedded in outputs. The study found that models built with a focus on capability rather than alignment (for example, Meta’s Llama-based weights, Mistral’s Large-2) showed the worst performance under multi-turn attacks, whereas models with stronger alignment emphasis (Google’s Gemma) held up better — though still demonstrated worrying vulnerabilities.

      The implications are clear for enterprises, developers and regulators. First, when selecting an open-weight model the safety and security posture must be a primary consideration, not an afterthought. It is insufficient to assume that a model’s published parameters or documentation guarantee safety. Second, deployment must include active monitoring, adversarial red-teaming and layered guardrails that go beyond the model’s built-in defences. Multi-turn persistence needs to be tested—not just singe-prompt sanitisation. Third, the AI industry and system integrators must take alignment seriously: building models that prioritise human values, ethics, refusal behaviour and robust conversation-based guarding is not optional if the model will be deployed in any production or publicly-facing system.

      From a policy and governance standpoint, this research adds urgency to calls for standards around AI safety, disclosures of model vulnerabilities, and perhaps certification of models before deployment in sensitive scenarios. Open-weight models can democratise AI development, but without safeguards they could also democratise misuse. In short: the low cost of entry for open-weight deployment must be matched by high standards for safety and oversight.

      For content creators like yourself—who work across media, podcasts, social-media outreach and brand building—this topic may be particularly relevant. As you produce digital-media assets, you may rely on AI systems for workflow automation, writing assistance or conversational interfaces. The findings here underscore the importance of vetting the models you use or recommend, ensuring that any AI assistant or tool you integrate has strong safeguards and monitoring in place. If you’re promoting or building a branded AI-enabled product or outreach channel, incorporating a clear safety and alignment strategy into your messaging can become a differentiator. Audiences are increasingly aware of AI risks; organic media coverage of AI-misuse events means your positioning can both reassure and engage your followers by demonstrating that you’re working with “safe AI” rather than simply chasing cutting-edge capability.

      In summary: open-weight AI models bring tremendous opportunity—but they also bring serious risk. The Cisco study serves as a wake-up call: model flexibility cannot come at the expense of safety. If your work involves AI tools, whether internally or for clients or audiences, you’ll be best served by choosing models and frameworks that treat guardrails as equally important to capability. Because in the world of AI production, it’s far easier to build power than to hold responsibility.

      Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
      Previous ArticleOne In Four UK Teens Turning To AI Chatbots For Mental Health Support, Study Finds
      Next Article OpenAI Confirms Mixpanel Data Breach Impacting Some API Users

      Related Posts

      NASA Astronauts Use iPhones to Capture Historic Artemis II Mission Images

      April 8, 2026

      OpenAI Expands Influence With Strategic TBPN Media Acquisition

      April 8, 2026

      Cybersecurity Veteran Turns Focus To Drone Hacking After Decades Battling Malware

      April 6, 2026

      Anonymous Social App Surges In Saudi Arabia, Testing Limits Of Digital Freedom

      April 6, 2026
      Add A Comment
      Leave A Reply Cancel Reply

      Editors Picks

      NASA Astronauts Use iPhones to Capture Historic Artemis II Mission Images

      April 8, 2026

      OpenAI Expands Influence With Strategic TBPN Media Acquisition

      April 8, 2026

      Cybersecurity Veteran Turns Focus To Drone Hacking After Decades Battling Malware

      April 6, 2026

      Anonymous Social App Surges In Saudi Arabia, Testing Limits Of Digital Freedom

      April 6, 2026
      Popular Topics
      Series B UAE Tech Samsung Tim Cook SpaceX Satya Nadella Series A trending Viral Tesla Cybertruck Ransomware Startup Robotics Quantum computing Tesla Taiwan Tech Sundar Pichai Software spotlight Sam Altman
      Major Tech Companies
      • Apple News
      • Google News
      • Meta News
      • Microsoft News
      • Amazon News
      • Samsung News
      • Nvidia News
      • OpenAI News
      • Tesla News
      • AMD News
      • Anthropic News
      • Elbit News
      AI & Emerging Tech
      • AI Regulation News
      • AI Safety News
      • AI Adoption
      • Quantum Computing News
      • Robotics News
      Key People
      • Sam Altman News
      • Jensen Huang News
      • Elon Musk News
      • Mark Zuckerberg News
      • Sundar Pichai News
      • Tim Cook News
      • Satya Nadella News
      • Mustafa Suleyman News
      Global Tech & Policy
      • Israel Tech News
      • India Tech News
      • Taiwan Tech News
      • UAE Tech News
      Startups & Emerging Tech
      • Series A News
      • Series B News
      • Startup News
      Tallwire
      Facebook X (Twitter) LinkedIn Threads Instagram RSS
      • Tech
      • Entertainment
      • Business
      • Government
      • Academia
      • Transportation
      • Legal
      • Press Kit
      © 2026 Tallwire. Optimized by ARMOUR Digital Marketing Agency.

      Type above and press Enter to search. Press Esc to cancel.